Minimum divergence based discriminative training

Authors

  • Jun Du
  • Peng Liu
  • Frank K. Soong
  • Jian-Lai Zhou
  • Ren-Hua Wang
Abstract

We propose to use Minimum Divergence (MD) as a new measure of errors in discriminative training. To focus on improving discrimination between any two given acoustic models, we refine the error definition in terms of the Kullback-Leibler Divergence (KLD) between them. The new measure can be regarded as a modified version of Minimum Phone Error (MPE), but with a higher resolution than a purely symbol-matching-based criterion. Experimental results show that the new MD-based training yields relative word error rate reductions of 57.8% and 6.1% on the TIDigits and Switchboard databases, respectively, compared with the ML-trained baseline systems. The recognition performance of MD is also shown to be consistently better than that of MPE.
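To make the acoustic-level error measure concrete, the sketch below shows one way a KLD-based local accuracy could replace the 0/1 phone-match score used in MPE. It is a minimal sketch only: it assumes diagonal-covariance Gaussian state densities, a one-to-one state alignment between the reference and hypothesized phones, and a simple negative-KLD accuracy mapping; the function names and the exact mapping are illustrative assumptions, not the formulation of the paper.

```python
# Minimal sketch of an MD-style acoustic error measure (assumptions: diagonal-covariance
# Gaussian state densities, one-to-one state alignment; names are illustrative).
import numpy as np

def kld_diag_gauss(mu_p, var_p, mu_q, var_q):
    """Closed-form KL divergence D(p || q) between two diagonal-covariance Gaussians."""
    return 0.5 * np.sum(np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def md_phone_accuracy(ref_states, hyp_states):
    """
    Acoustic-level 'accuracy' of a hypothesized phone against a reference phone.
    Each phone is a list of (mean, variance) pairs, one per HMM state.  Instead of
    the 0/1 symbol match of MPE, the score is the negative KLD averaged over aligned
    states, so acoustically similar phones are penalized less than dissimilar ones.
    """
    klds = [kld_diag_gauss(mu_r, var_r, mu_h, var_h)
            for (mu_r, var_r), (mu_h, var_h) in zip(ref_states, hyp_states)]
    return -float(np.mean(klds))

# Toy usage: three-state phone models in a 2-dimensional feature space.
rng = np.random.default_rng(0)
ref = [(rng.normal(size=2), np.ones(2)) for _ in range(3)]
near = [(mu + 0.1, var) for mu, var in ref]   # acoustically close hypothesis
far = [(mu + 3.0, var) for mu, var in ref]    # acoustically distant hypothesis
print(md_phone_accuracy(ref, near))           # close to 0: small divergence, small error
print(md_phone_accuracy(ref, far))            # strongly negative: large divergence
```

In an MPE-style objective, a score of this kind would weight competing lattice paths, so the training gradient distinguishes acoustic near-misses from gross errors rather than treating every phone mismatch equally.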

Similar Resources

A Divergence-based Model Separation

In this paper, a divergence-based training algorithm is proposed for model separation, where the relative divergence between models is derived from Kullback-Leibler (KL) information. We attempt to improve the discriminative power of an existing model when environment-matched training data is not available. It could be applied to improve model discrimination after model-based compensation t...

A regularized discriminative training method of acoustic models derived by minimum relative entropy discrimination

We present a realization method of the principle of minimum relative entropy discrimination (MRED) in order to derive a regularized discriminative training method. MRED is advantageous since it provides a Bayesian interpretation of conventional discriminative training methods and regularization techniques. In order to realize MRED for speech recognition, we propose an approximation method...

Performance of Discriminative HMM Training in Noise

In this study, discriminative HMM training and its performance are investigated in both clean and noisy environments. Recognition error is defined at the string, word, phone, and acoustic levels and treated in a unified framework in discriminative training. With an acoustic-level, high-resolution error measurement, a discriminative criterion of minimum divergence (MD) is proposed. Using speaker-ind...

Sparse Forward-Backward for Fast Training of Conditional Random Fields

Complex tasks in speech and language processing often include random variables with large state spaces, both in speech tasks that involve predicting words and phonemes, and in joint processing of pipelined systems, in which the state space can be the labeling of an entire sequence. In large state spaces, however, discriminative training can be expensive, because it often requires many calls to ...
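As a rough illustration of why a sparse (pruned) recursion helps when the label space is large, below is a minimal sketch of a beam-pruned forward pass for a linear-chain model, written in plain NumPy. The beam threshold, the unnormalized log potentials, and the function name are assumptions made for illustration; the sparse forward-backward algorithm of the cited paper differs in detail and also prunes the backward pass.

```python
# Minimal sketch of a beam-pruned ("sparse") forward pass for a linear-chain model.
# The beam width and the use of unnormalized log potentials are illustrative assumptions.
import numpy as np

def sparse_forward(log_pot, log_trans, beam=5.0):
    """
    log_pot:   (T, S) per-position label scores (log potentials)
    log_trans: (S, S) label-to-label transition scores (log potentials)
    beam:      predecessor labels scoring more than `beam` below the best forward
               score at a position are dropped from the recursion at that step.
    Returns an approximate log partition value computed over the surviving labels.
    """
    T, S = log_pot.shape
    alpha = log_pot[0].copy()
    for t in range(1, T):
        # keep only predecessor labels within `beam` of the current best score
        active = np.flatnonzero(alpha >= alpha.max() - beam)
        scores = alpha[active][:, None] + log_trans[active]      # (|active|, S)
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0)) + log_pot[t]
    m = alpha.max()
    return float(m + np.log(np.exp(alpha - m).sum()))

# Toy usage: a 4-position sequence with 6 possible labels per position.
rng = np.random.default_rng(1)
print(sparse_forward(rng.normal(size=(4, 6)), rng.normal(size=(6, 6))))
```

With a large label set, the pruning step keeps the per-position work proportional to the number of surviving labels rather than to the full state space, which is where the training-time savings come from.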

A Fast Discriminative Training Algorithm for Minimum Classification Error

In this paper a new algorithm is proposed for fast discriminative training of hidden Markov models (HMMs) based on minimum classification error (MCE). The algorithm is able to train acoustic models in a few iterations, thus overcoming the slow training speed typical of discriminative training methods based on gradient descent. The algorithm tries to cancel the gradient of the objective funct...

Publication year: 2006